chore: sync spec-orchestrator from rust-template#90
Conversation
- Add team coordination tools to all agents
- Update spec-orchestrator command with lifecycle fixes
Benchmark Results
No benchmarks configured. Add benchmarks to benches/ directory. Full results available in CI artifacts.
Codecov Report
✅ All modified and coverable lines are covered by tests.

```
@@           Coverage Diff           @@
##             main      #90   +/-   ##
=======================================
  Coverage   95.83%   95.83%
=======================================
  Files           9        9
  Lines        6499     6499
=======================================
  Hits         6228     6228
  Misses        271      271
```
Code Coverage Report
Overall Coverage: 0%
Full HTML report available in CI artifacts.
Pull request overview
Syncs the upstream “spec-orchestrator” Claude command into this repo to enable parallelized spec discovery → plan synthesis → wave-based implementation, and updates local agent tool permissions to better support task/team workflows.
Changes:
- Added `.claude/commands/spec-orchestrator.md` command (includes jq-based inventory synthesis guidance in Phase 2).
- Expanded tool access in `.claude/agents/*` to include task/team messaging and task lifecycle tools.
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| .claude/commands/spec-orchestrator.md | New spec orchestration procedure, including jq-based merging/summarization workflow. |
| .claude/agents/test-engineer.md | Updates description formatting; expands tools to include Task*/SendMessage/LSP. |
| .claude/agents/rust-developer.md | Updates description formatting; expands tools to include Task*/SendMessage. |
| .claude/agents/code-reviewer.md | Updates description formatting; expands tools to include editing + Task lifecycle tools. |
```sh
jq '[.enums[] | {phase: "A", subject: "Define \(.name) enum", spec_file: .spec_file, existing: .existing_impl}]' /tmp/discovery/merged.json

# Phase B candidates: one task per model
jq '[.models[] | {phase: "B", subject: "Implement \(.name) model", fields: [.fields[].name], spec_file: .spec_file, existing: .existing_impl}]' /tmp/discovery/merged.json

# Phase C candidates: repository/data layer per model
jq '[.models[] | {phase: "C", subject: "Implement \(.name) repository and queries", spec_file: .spec_file, relationships: .relationships}]' /tmp/discovery/merged.json

# Phase D candidates: one task per endpoint
jq '[.endpoints[] | {phase: "D", subject: "\(.method) \(.path)", status_codes: .status_codes, error_cases: .error_cases, spec_file: .spec_file}]' /tmp/discovery/merged.json

# Phase E candidates: business logic and cross-entity workflows
jq '[.business_logic[] | {phase: "E", subject: "Implement: \(.description)", affected_entities: .affected_entities, spec_file: .spec_file}]' /tmp/discovery/merged.json

# Phase F candidates: auth, middleware, cross-cutting concerns
jq '[.cross_cutting[] | {phase: "F", subject: "Implement \(.concern)", details: .details, spec_file: .spec_file}]' /tmp/discovery/merged.json

# Phase G candidates: integration tests per endpoint
jq '[.endpoints[] | {phase: "G", subject: "Integration tests for \(.method) \(.path)", error_cases: .error_cases, spec_file: .spec_file}]' /tmp/discovery/merged.json
```
The Step 4 jq task-candidate examples will error if the arrays/fields are missing or empty (e.g., .models[] when models is absent, or .fields[].name when fields is null). To make the guidance robust, use (.models // [])[] / (.enums // [])[] and extract field names via something like (.fields // [] | map(.name)).
Suggested change (null-safe variants of the snippet above):

```sh
jq '[(.enums // [])[] | {phase: "A", subject: "Define \(.name) enum", spec_file: .spec_file, existing: .existing_impl}]' /tmp/discovery/merged.json

# Phase B candidates: one task per model
jq '[(.models // [])[] | {phase: "B", subject: "Implement \(.name) model", fields: (.fields // [] | map(.name)), spec_file: .spec_file, existing: .existing_impl}]' /tmp/discovery/merged.json

# Phase C candidates: repository/data layer per model
jq '[(.models // [])[] | {phase: "C", subject: "Implement \(.name) repository and queries", spec_file: .spec_file, relationships: .relationships}]' /tmp/discovery/merged.json

# Phase D candidates: one task per endpoint
jq '[(.endpoints // [])[] | {phase: "D", subject: "\(.method) \(.path)", status_codes: .status_codes, error_cases: .error_cases, spec_file: .spec_file}]' /tmp/discovery/merged.json

# Phase E candidates: business logic and cross-entity workflows
jq '[(.business_logic // [])[] | {phase: "E", subject: "Implement: \(.description)", affected_entities: .affected_entities, spec_file: .spec_file}]' /tmp/discovery/merged.json

# Phase F candidates: auth, middleware, cross-cutting concerns
jq '[(.cross_cutting // [])[] | {phase: "F", subject: "Implement \(.concern)", details: .details, spec_file: .spec_file}]' /tmp/discovery/merged.json

# Phase G candidates: integration tests for each endpoint
jq '[(.endpoints // [])[] | {phase: "G", subject: "Integration tests for \(.method) \(.path)", error_cases: .error_cases, spec_file: .spec_file}]' /tmp/discovery/merged.json
```
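The difference is easy to verify on a minimal inventory (the file name and shape below are illustrative, not from the actual discovery output):

```shell
# Inventory that has "enums" but no "models" key at all
echo '{"enums": [{"name": "Status"}]}' > /tmp/inventory_demo.json

# Bare .models[] fails: iterating over null is an error in jq
jq '[.models[] | .name]' /tmp/inventory_demo.json 2>/dev/null || echo "jq errored"

# (.models // [])[] degrades to an empty result instead
jq -c '[(.models // [])[] | .name]' /tmp/inventory_demo.json
```

The first command prints `jq errored`; the second prints `[]`, which downstream steps can consume safely.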
### 1.4 Collect & Validate Discovery

After all discovery `Task` subagents return their results, use `Glob` pattern `/tmp/discovery/*.json` to verify all inventory files exist. Read only the inventory JSON files — do NOT re-read the original spec files.
Phase 1.4 instructs to "Read only the inventory JSON files", but Phase 2 later says to NEVER read /tmp/discovery/*.json directly with Read and to use jq instead. This is internally inconsistent and could cause context overflow if followed literally. Consider updating Phase 1.4 to also use jq (or at least Read with a small limit) and keep the guidance consistent across phases.
Suggested change:

> After all discovery `Task` subagents return their results, use `Glob` pattern `/tmp/discovery/*.json` to verify all inventory files exist. When consuming inventories, do NOT `Read` these JSON files directly in full; instead, use `jq` to extract only the specific fields or records you need from `/tmp/discovery/*.json` (or, if absolutely necessary, use `Read` with a small `limit` on the `jq`-reduced output). Do NOT re-read the original spec files.
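In practice the jq-based consumption can look like the following sketch (the demo directory and field names are illustrative):

```shell
# Stand-in for /tmp/discovery/*.json inventory files
mkdir -p /tmp/discovery_demo
echo '{"endpoints": [{"path": "/users"}, {"path": "/orders"}]}' > /tmp/discovery_demo/a.json
echo '{"models": [{"name": "User"}]}' > /tmp/discovery_demo/b.json

# Summarize each inventory with jq instead of reading whole files into context
for f in /tmp/discovery_demo/*.json; do
  jq -r --arg f "$f" \
    '"\($f): \((.endpoints // []) | length) endpoints, \((.models // []) | length) models"' "$f"
done
```

Each file contributes one short summary line, so the orchestrator's context grows with the number of files, not their size.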
```
total_endpoints: [.[].endpoints // [] | length] | add,
total_models: [.[].models // [] | length] | add,
total_enums: [.[].enums // [] | length] | add,
total_validation_rules: [.[].validation_rules // [] | length] | add,
total_business_logic: [.[].business_logic // [] | length] | add,
total_gaps: [.[].gaps // [] | length] | add,
```
The Step 1 jq example uses ... | add for aggregate counts. In jq, add on an empty array returns null, which can be surprising if there are zero inventory files (or if a glob expands unexpectedly). Consider using add // 0 for each count so the output is always numeric and the example is copy/paste safe.
Suggested change:

```
total_endpoints: [.[].endpoints // [] | length] | add // 0,
total_models: [.[].models // [] | length] | add // 0,
total_enums: [.[].enums // [] | length] | add // 0,
total_validation_rules: [.[].validation_rules // [] | length] | add // 0,
total_business_logic: [.[].business_logic // [] | length] | add // 0,
total_gaps: [.[].gaps // [] | length] | add // 0,
```
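The null-vs-zero behaviour can be checked directly, with no input files needed:

```shell
# add over an empty array yields null, not 0
jq -n '[] | add'          # prints: null

# the // 0 fallback keeps the result numeric
jq -n '[] | add // 0'     # prints: 0

# with input present, the fallback is a no-op
jq -n '[2, 3] | add // 0' # prints: 5
```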
```
Wave 2: Phase B tasks (blocked by Phase A)
Wave 3: Phase C tasks (blocked by Phase B)
...
```
The `git add {files_modified}` and `git commit -m "feat({domain}): {concise description} ..."` commands interpolate file paths and other values directly into shell commands without any quoting or escaping, which can enable command injection via crafted filenames or description text containing shell metacharacters (e.g., `$(...)` or backticks). An attacker who introduces maliciously named files or spec-derived strings could cause arbitrary commands to execute when these Bash snippets run as written. To mitigate this, pass file paths and dynamic text to git without going through the shell, or rigorously shell-escape/quote them: use `git add --` with safely constructed argument lists, and build commit messages without unescaped interpolation.
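One way to harden this is sketched below; the crafted filename and commit message are illustrative, not taken from the PR. Paths go after `--` as discrete quoted arguments, and the commit message is fed via `git commit -F -` so shell metacharacters stay literal:

```shell
# A crafted "filename" containing a command substitution
name='evil$(touch /tmp/pwned_demo).rs'

# Unsafe: eval "git add $name" would execute the embedded `touch`.
# Safe pattern (shown as comments; requires a git repo to actually run):
#   git add -- "$name"
#   printf '%s\n' "feat(users): sync spec" | git commit -F -

# Demonstrate that quoting keeps the metacharacters inert:
printf 'argument seen: %s\n' "$name"
test ! -e /tmp/pwned_demo && echo "no command substitution ran"
```

Because `$name` is always expanded as a single quoted argument and never re-parsed by the shell, the `$(touch ...)` stays an ordinary string.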
Summary

- `.claude/commands/spec-orchestrator.md` from rust-template upstream

Test plan